Accurate Error Bounds for the Eigenvalues of the Kernel Matrix

Author

  • Mikio L. Braun
Abstract

The eigenvalues of the kernel matrix play an important role in a number of kernel methods, in particular, in kernel principal component analysis. It is well known that the eigenvalues of the kernel matrix converge as the number of samples tends to infinity. We derive probabilistic finite sample size bounds on the approximation error of individual eigenvalues which have the important property that the bounds scale with the eigenvalue under consideration, reflecting the actual behavior of the approximation errors as predicted by asymptotic results and observed in numerical simulations. Such scaling bounds have so far only been known for tail sums of eigenvalues. Asymptotically, the bounds presented here have a slower than stochastic rate, but the number of sample points necessary to make this disadvantage noticeable is often unrealistically large. Therefore, under practical conditions, and for all but the largest few eigenvalues, the bounds presented here form a significant improvement over existing non-scaling bounds.
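
As a rough numerical illustration of the convergence discussed above, the following sketch computes the leading eigenvalues of the scaled kernel matrix K/n for growing sample sizes. The Gaussian RBF kernel, its bandwidth, the uniform sampling distribution, and the sample sizes are illustrative assumptions, not taken from the paper.

    import numpy as np

    def kernel_matrix(x, bandwidth=0.5):
        # Gaussian RBF kernel matrix for one-dimensional samples x.
        d = x[:, None] - x[None, :]
        return np.exp(-d**2 / (2.0 * bandwidth**2))

    rng = np.random.default_rng(0)
    for n in (100, 400, 1600):
        x = rng.uniform(0.0, 1.0, size=n)
        # Eigenvalues of K/n approximate the eigenvalues of the integral
        # operator induced by the kernel; sort them in descending order.
        eigs = np.sort(np.linalg.eigvalsh(kernel_matrix(x) / n))[::-1]
        print(n, eigs[:5])

Repeating this over several random draws lets one inspect how the fluctuation of each approximate eigenvalue compares to its own magnitude, which is the scaling behavior the bounds in the paper address.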

Similar Resources

Eigenvalues Ratio for Kernel Selection of Kernel Methods

The selection of the kernel function, which determines the mapping between the input space and the feature space, is of crucial importance to kernel methods. Existing kernel selection approaches commonly use some measures of generalization error, which are usually difficult to estimate and have slow convergence rates. In this paper, we propose a novel measure, called the eigenvalues ratio (ER), of the ti...

Some remarks on the sum of the inverse values of the normalized signless Laplacian eigenvalues of graphs

Let $G=(V,E)$, $V=\{v_1,v_2,\ldots,v_n\}$, be a simple connected graph with $n$ vertices, $m$ edges and a sequence of vertex degrees $d_1\geq d_2\geq\cdots\geq d_n>0$, $d_i=d(v_i)$. Let $A=(a_{ij})_{n\times n}$ and $D=\mathrm{diag}(d_1,d_2,\ldots,d_n)$ be the adjacency and the diagonal degree matrix of $G$, respectively. Denote by $\mathcal{L}^+(G)=D^{-1/2}(D+A)D^{-1/2}$ the normalized signless ...
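
A minimal sketch of the quantity this excerpt studies, the sum of inverse eigenvalues of $\mathcal{L}^+(G)$, using the definition quoted above. The example graph (a triangle with a pendant vertex) is an arbitrary choice; note that for connected graphs $\mathcal{L}^+$ is singular exactly when the graph is bipartite, in which case the inverse eigenvalues would be undefined.

    import numpy as np

    # Adjacency matrix of a triangle with a pendant vertex (non-bipartite).
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)
    d = A.sum(axis=1)                          # vertex degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Normalized signless Laplacian: L+ = D^{-1/2} (D + A) D^{-1/2}.
    L_plus = D_inv_sqrt @ (np.diag(d) + A) @ D_inv_sqrt
    eigs = np.linalg.eigvalsh(L_plus)
    print("eigenvalues:", eigs)
    print("sum of inverse eigenvalues:", np.sum(1.0 / eigs))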

Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices

  • Many kernel-based learning algorithms carry a heavy computational load.
  • The Nyström low-rank approximation is designed to reduce the computation.
  • We propose the spectrum decomposition condition with a theoretical justification.
  • Asymptotic error bounds on eigenvalues and eigenvectors are derived.
  • Numerical experiments are provided for the covariance kernel and Wishart matrix.

AMS subject cla...
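
The following is a hedged sketch of the standard Nyström construction these highlights refer to: the full kernel matrix is approximated from a small set of landmark columns as K ≈ C W⁺ Cᵀ. The kernel, the landmark selection, and the sizes are illustrative assumptions, not the authors' setup.

    import numpy as np

    def rbf(x, y, bw=0.5):
        # Gaussian RBF kernel block between sample sets x and y.
        return np.exp(-(x[:, None] - y[None, :])**2 / (2.0 * bw**2))

    rng = np.random.default_rng(0)
    n, m = 1000, 100                     # full sample size, landmark count
    x = rng.uniform(0.0, 1.0, size=n)
    landmarks = x[:m]                    # naive choice: first m points
    C = rbf(x, landmarks)                # n x m cross-kernel block
    W = rbf(landmarks, landmarks)        # m x m landmark block
    K_approx = C @ np.linalg.pinv(W) @ C.T   # Nyström: K ≈ C W^+ C^T
    K = rbf(x, x)
    print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))

In practice one would not form K_approx explicitly; approximate eigenvalues and eigenvectors can be recovered from an m-by-m eigenproblem, which is where the computational savings mentioned in the first two highlights come from.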

Sharp Bounds on the PI Spectral Radius

In this paper, some upper and lower bounds for the greatest eigenvalues of the PI and vertex PI matrices of a graph G are obtained. The graphs for which these bounds are best possible are characterized.

APPLICATION OF THE RANDOM MATRIX THEORY ON THE CROSS-CORRELATION OF STOCK PRICES

The analysis of cross-correlations is extensively applied to the understanding of interconnections in stock markets. A variety of methods is used to search for stock cross-correlations, including Random Matrix Theory (RMT), Principal Component Analysis (PCA) and Hierarchical Structures. In this work, we analyze cross-correlations between the price fluctuations of 20 company stocks...
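
As a hedged illustration of the RMT comparison this excerpt describes, the sketch below compares the eigenvalues of an empirical correlation matrix against the Marchenko-Pastur interval that pure noise would produce; eigenvalues outside it are candidates for genuine correlation structure. The synthetic Gaussian "returns" and the 20-by-500 dimensions are stand-ins, not the paper's stock data.

    import numpy as np

    rng = np.random.default_rng(0)
    n_stocks, n_days = 20, 500
    # Synthetic i.i.d. returns stand in for real price fluctuations.
    r = rng.standard_normal((n_days, n_stocks))
    r = (r - r.mean(axis=0)) / r.std(axis=0)   # standardize each stock
    C = r.T @ r / n_days                       # empirical correlation matrix
    eigs = np.linalg.eigvalsh(C)
    # Marchenko-Pastur bounds for a purely random correlation matrix.
    q = n_stocks / n_days
    lo, hi = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
    print("MP interval: [%.3f, %.3f]" % (lo, hi))
    print("eigenvalues outside it:", eigs[(eigs < lo) | (eigs > hi)])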

Journal:
  • Journal of Machine Learning Research

Volume 7, Issue:

Pages: -

Publication date: 2006